To improve road traffic safety, many research institutions and automobile manufacturers at home and abroad have invested heavily in the research and development of automotive safety systems. This work has evolved from early mechanical and electronic devices to today's focus of attention, the advanced driver assistance system (ADAS). ADAS-class systems use a variety of sensors, such as ultrasonic sensors, vision sensors, radar, and GPS, to sense the vehicle's own state and the changing environment while driving, collecting both vehicle data and environmental data. Based on these data, they identify traffic scenes, predict traffic incidents, and issue driving suggestions and emergency measures, assisting drivers in making decisions, avoiding traffic accidents, and reducing the damage accidents cause. During actual driving, most of the information a driver uses comes from vision: road conditions, traffic signs, markings and signals, obstacles, and so on. Research shows that about 90% of environmental information is visual, so making good use of vision sensors to understand the road environment is a natural choice for vehicle intelligence. A vision-based driver assistance system that performs traffic sign detection, road detection, pedestrian detection, and obstacle detection can reduce the driver's workload, improve driving safety, and reduce traffic accidents. Such a system processes a large amount of visual data in the course of providing decision-making advice to the driver.
In this respect, visual images have unmatched advantages: a visual image contains a large amount of information, such as the distance, shape, texture, and color of objects within the visible range; visual sensing is non-contact, damaging neither the road surface nor the surrounding environment and requiring no large-scale modification of existing road facilities; a single visual image can support road inspection, traffic sign detection, obstacle detection, and many other tasks simultaneously; and there is no interference between vehicles during acquisition. In summary, machine vision for intelligent vehicles has broad application prospects in intelligent transportation, vehicle safety assisted driving, and automated driving.

1. Application of machine vision in advanced driver assistance systems

At present, vision sensors and machine vision technology are widely used in advanced driver assistance systems. Perception of the driving environment is one of the most important components of a vision-based ADAS. It relies mainly on visual techniques to perceive road information, traffic information, and driver status while the vehicle is moving, providing the basic data needed for the assistance system's decisions. Road information refers to static information outside the vehicle, including lane lines, road edges, traffic signs, and signal lights. Traffic information refers to dynamic information outside the vehicle, including obstacles, pedestrians, and vehicles ahead. Driver status is in-vehicle information, mainly covering driver fatigue and abnormal driving behavior; by alerting the driver to unsafe behavior, the system helps avoid safety accidents.
Using machine vision to perceive the driving environment yields both static and dynamic information inside and outside the vehicle, helping the driver assistance system make decisions. From the classification above, the key technologies of current vision-based advanced driver assistance systems include lane line detection, traffic sign recognition, vehicle identification, pedestrian detection, and driver state detection.

1.1 Lane line detection technology

Research on lane line detection mainly involves two aspects: equipment and algorithms. Data acquisition is based on different sensor devices such as lidar, stereo vision, and monocular vision. The collected information must then be matched with suitable algorithms, such as model-based and feature-based methods, for computation and decision-making. Lidar identifies roads by the different reflectivity of different colors or materials. Stereo vision is more accurate than lidar, but image matching is difficult, the equipment is expensive, and the algorithmic complexity leads to poor real-time performance. Monocular vision is mainly implemented with feature-based, model-based, fusion, and machine learning methods, and is currently the mainstream approach to lane line recognition. Feature-based algorithms first extract image features, such as edge information, and then apply predetermined rules to these features to locate lane markings. For example, Lee et al. proposed a feature-based lane line detection method in 2002.
They used an edge distribution function, accumulating and differentiating the global gradient angles to find the maximum cumulant, and combined this with the symmetry of the left and right lane lines to locate the lane. The main advantage of this kind of algorithm is that it is insensitive to lane shape: even under strong noise interference (such as shadows or worn markings) it remains robust and can reliably detect lane lines, typically represented with a straight-line model. In 2010, Lopez et al. proposed extracting lane line features using image "ridges" instead of edge information. A ridge reflects how strongly the pixels in a neighborhood converge; in a lane marking region it appears as a bright area with a local maximum along the center of the line. Compared with image edges, ridges are better suited to lane detection. Model-based lane recognition methods build a mathematical road model and estimate its parameters from image information to complete detection. Shengyan Zhou et al. proposed a lane recognition method based on Gabor filters and a geometric model: assuming a lane marking exists in front of the vehicle, it can be described by four parameters (the origin, width, curvature, and starting position of the lane line). With a pre-calibrated camera, several candidate lane line models are generated from the computed parameters; the algorithm then estimates the required parameters by a local Hough transform and region localization, selects the final model, and matches it to the actual lane line. Broadly, model-based lane recognition methods divide into simple linear models and more complex ones, such as quadratic curves and spline curves.
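The gradient-angle accumulation behind Lee et al.'s edge distribution function can be sketched as a magnitude-weighted orientation histogram. This is a minimal illustration, not their exact formulation; the function name, bin count, and synthetic test frame are my own assumptions:

```python
import numpy as np

def edge_distribution_function(gray, bins=90):
    # Histogram of gradient orientations over [0, 180) degrees, weighted
    # by gradient magnitude. Peaks correspond to dominant edge directions,
    # such as the left and right lane markings.
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    hist, bin_edges = np.histogram(ang, bins=bins, range=(0.0, 180.0), weights=mag)
    return hist, bin_edges

# Synthetic frame: one bright diagonal stripe yields one dominant peak,
# the way a lane marking would in a real image.
img = np.zeros((64, 64))
for i in range(64):
    img[i, max(0, i - 2):i + 2] = 1.0
hist, bin_edges = edge_distribution_function(img)
peak_angle = bin_edges[np.argmax(hist)]
```

In the full method, the peak angles of the left and right lines are paired using the symmetry constraint mentioned above before the line positions are fitted.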
In practice, the method must be chosen according to the specific scenario and road characteristics. For example, most lane departure warning systems describe lane lines with simple linear models, while tasks that require flexible lane representations, such as lane line estimation and tracking, often use more complex models.

1.2 Traffic sign recognition technology

Traffic sign recognition can alert drivers to the signs in the road environment, helping them make correct decisions and improving driving safety. Traffic signs usually have distinctive visual features, such as color and shape, which can be used to detect different signs. In the literature, detection methods that combine color features and shape features are the most widespread. In practice, however, the quality of captured sign images may be degraded by illumination and weather changes, and signs may be occluded, distorted, or worn, all of which can reduce a detector's accuracy. Most current implementations segment the image by thresholding color components, extract regions of interest (ROI) from the complex background, and then apply shape filtering to the ROIs to detect the area where a sign is located. A common approach is direct color threshold segmentation, which thresholds all pixels of the image in RGB space and then uses corner detection to decide whether a sign is present in the target area. Because this handles illumination changes and occlusion poorly, many researchers have improved upon it.
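The color-threshold segmentation step described above can be sketched with normalized RGB, where each pixel's red share is compared against a threshold. This is a simplified illustration; the threshold value, function name, and synthetic image are assumptions, not taken from any specific system:

```python
import numpy as np

def red_sign_mask(rgb, thresh=0.5):
    # Normalised red chromaticity r = R / (R + G + B); pixels whose red
    # share exceeds `thresh` become candidate sign pixels. Normalising
    # by brightness gives some tolerance to illumination changes.
    rgb = rgb.astype(float)
    total = rgb.sum(axis=2) + 1e-9
    return rgb[..., 0] / total > thresh

# Synthetic scene: a red square (like a sign face) on a grey background.
img = np.full((32, 32, 3), 128, dtype=np.uint8)
img[8:24, 8:24] = (200, 30, 30)
mask = red_sign_mask(img)   # True only inside the red square
```

A real pipeline would follow this mask with the shape filtering or corner detection described above to confirm that a candidate region is actually a sign.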
A common improvement is to convert the RGB image into a color model closer to human perception of color, such as HSV or HSI, before segmentation and extraction, which effectively mitigates the illumination and occlusion problems. The most representative application of traffic sign recognition is in intelligent transportation systems (ITS). In 2010, a TSR system developed at the University of Massachusetts used a color threshold segmentation algorithm and principal component analysis for target detection and recognition. Its recognition accuracy reached 99.2%, and it performed well under slight occlusion and low-visibility weather, showing good robustness and applicability; however, at a processing speed of 2.5 s per frame, its main shortcoming was that it could not meet real-time requirements. In 2011, Germany organized a traffic sign recognition competition (IJCNN 2011), which accelerated research on traffic sign detection and recognition. In that competition, Ciresan et al. applied a deep convolutional neural network to the GTSRB database and exceeded the average human recognition rate. In 2012, Greenhalgh et al. took the maxima of the R and B channels in normalized RGB space, extracted MSER regions, and used an SVM to classify signs; this method offers good real-time performance. In 2013, Kim JB argued that color and shape features are easily affected by the surrounding environment and added a visual saliency model for sign detection, achieving high real-time performance.

1.3 Vehicle identification technology

In vehicle identification, many experts and scholars are currently studying multi-sensor fusion.
This is because a single sensor struggles to detect vehicles in complex traffic environments, where vehicles differ in shape, size, and color and objects occlude, clutter, and move relative to one another; multi-sensor fusion achieves complementary coverage and is the development trend in vehicle identification. Radar has clear advantages for detecting the position, speed, and depth of obstacles ahead of the vehicle; its main types are laser radar (lidar), millimeter-wave radar, and microwave radar, with lidar further divided into single-line, four-line, and multi-line devices. Based on on-board camera imagery, the external environment can be sensed with stereo or monocular vision. Stereo vision aims to recover the depth of obstacles, but in practice its heavy computation makes real-time performance hard to guarantee at highway speeds, and the calibration parameters of binocular or multi-view cameras often drift under vehicle vibration, causing false detections and misses. Monocular vision has a clear real-time advantage and is the most commonly used approach; its methods include detection based on prior knowledge, motion-based detection, and statistical learning-based detection. Prior-knowledge methods extract certain vehicle features as priors, similar in principle to the feature-based algorithms in lane detection; commonly used priors include vehicle symmetry, color, shadow, edge features, and texture. The method searches the image for areas matching the prior model, i.e., regions of interest (ROI) where a vehicle may be present.
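The symmetry prior mentioned above can be scored very simply by comparing a candidate ROI with its mirror image; vehicles seen from behind are strongly left-right symmetric. The scoring formula below is illustrative, not a published method:

```python
import numpy as np

def symmetry_score(patch):
    # Returns 1.0 for a perfectly left-right symmetric grayscale patch;
    # lower values as the patch and its horizontal mirror diverge.
    patch = patch.astype(float)
    diff = np.abs(patch - patch[:, ::-1]).mean()
    scale = patch.max() - patch.min() + 1e-9
    return 1.0 - diff / scale

rear_view = np.tile([0, 50, 100, 100, 50, 0], (4, 1))   # symmetric, car-like
roadside = np.tile([0, 20, 40, 60, 80, 100], (4, 1))    # asymmetric ramp
```

Candidate regions with high scores would then pass to the machine-learning confirmation stage described next.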
The identified ROIs are usually further confirmed by machine learning. Motion-based detection exploits the fact that a moving object produces different image information in different frames: it typically processes several frames with large differences and accumulates enough information to identify moving objects and detect obstacles. However, the heavy computation limits its real-time performance in practice. The main motion-based technique is the optical flow method, one of the standard approaches to moving object detection in machine vision and pattern recognition; it uses changes in the gray-level distribution of a moving object's pixel sequence in the image plane to establish a coordinate system in which obstacle positions are detected. Statistical learning-based detection first requires collecting enough samples of vehicles ahead, covering different environments, weather, distances, and so on. Training generally uses methods such as neural networks and Haar wavelets; once training is complete, the model is applied to the target detection task.

1.4 Pedestrian detection technology

Pedestrian detection has certain peculiarities compared with the other driver assistance technologies discussed here: pedestrians combine rigid and non-rigid (articulated) characteristics, so detection is sensitive to pedestrian behavior, clothing, and posture.
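Both the motion-based vehicle detection above and the frame-difference method used for video pedestrian detection rest on the same basic operation: thresholding the intensity difference between consecutive frames. A minimal sketch, with an illustrative threshold and synthetic frames:

```python
import numpy as np

def moving_mask(prev_frame, curr_frame, thresh=20):
    # Pixels whose intensity changed by more than `thresh` between two
    # consecutive frames are marked as belonging to a moving object.
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int))
    return diff > thresh

f0 = np.zeros((16, 16), dtype=np.uint8)
f1 = f0.copy()
f1[4:8, 4:8] = 255          # an object enters this 4x4 region
mask = moving_mask(f0, f1)  # True exactly where the scene changed
```

Real systems accumulate such masks over many frames (or compute dense optical flow) before declaring an obstacle, which is where the computational cost noted above comes from.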
Pedestrian detection extracts pedestrian positions from sensor imagery and judges pedestrian motion. For video, the moving target region is extracted with background subtraction, optical flow, or frame differencing, and human shape and skin-color features are then judged. For still images, the main methods are template matching, shape-based detection, and machine learning-based detection. Because the first two have obvious shortcomings, they have seen little practical use in recent years; this article therefore focuses on machine learning-based detection. The performance of learning-based pedestrian detection depends mainly on the pedestrian descriptor and the classifier training, and descriptor complexity affects real-time performance. HOG is the most widely used pedestrian descriptor; Haar, LBP, and their variants are also common. The classifier determines the detection rate; neural networks, support vector machines, and boosting are the common choices. Many pedestrian detection techniques build on these methods and their refinements, optimizing detection in different respects. Take HOG with a linear support vector machine (SVM) as an example: HOG describes the local gradient magnitudes and orientations of the image, normalizes the feature vectors over gradient-based blocks, and allows blocks to overlap; it is insensitive to illumination changes and small offsets and effectively characterizes the edges of the human body.
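The HOG computation just described, local orientation histograms of gradient magnitude followed by normalization, can be sketched for a single cell. This is a toy illustration: real HOG uses 8×8-pixel cells grouped into overlapping 2×2-cell blocks, whereas here normalization is applied over the single cell for brevity:

```python
import numpy as np

def hog_cell_histogram(cell, bins=9):
    # Gradient magnitudes accumulated into `bins` unsigned-orientation
    # bins over [0, 180), then L2-normalised (over this one cell only).
    gy, gx = np.gradient(cell.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, 180.0), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-9)

cell = np.zeros((8, 8))
cell[:, 4:] = 1.0            # a vertical edge -> horizontal gradients
h = hog_cell_histogram(cell)
```

Concatenating such histograms over all blocks of a detection window gives the feature vector that the linear SVM classifies as pedestrian or background.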
On the simple MIT pedestrian database, HOG features combined with an SVM achieve a detection rate of nearly 100%.

1.5 Driver state detection technology

Early driver state detection relied mainly on the vehicle's running state, including lane departure warning and steering wheel monitoring. Such methods are insensitive to the driver's own characteristics and are easily misled by environmental factors, so in recent research they are rarely used alone. This article introduces detection based on the driver's facial features, and its combination with multi-sensor fusion. Among facial-feature methods, head features are the most common: visual features of the driver's head reflect mental state, such as eye blink state and frequency, mouth movement, and head posture, all of which a camera can capture without disturbing normal driving. This non-contact approach has gradually become the mainstream. FaceLAB is a representative eye-feature-based system: it fuses characteristic parameters such as head posture, eyelid movement, gaze direction, and pupil diameter to detect driver fatigue in real time, and its eye closure and gaze direction methods address gaze tracking under dim lighting, head movement, and eyeglasses. In 2008, the latest version, FaceLAB v4, adopted leading infrared active illumination to further improve the accuracy and precision of gaze detection and to track each eye independently.
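Eyelid-movement signals like those FaceLAB fuses are often summarized as PERCLOS, the fraction of time the eyes are closed over a sliding window of frames. The function below is a generic sketch of that measure, not FaceLAB's implementation, and the threshold mentioned in the comment is a commonly cited rule of thumb:

```python
def perclos(closed_flags):
    # Fraction of frames in which an eye-state detector judged the eyes
    # closed; sustained values above roughly 0.15 are a widely used
    # fatigue indicator in the driver-monitoring literature.
    if not closed_flags:
        return 0.0
    return sum(bool(f) for f in closed_flags) / len(closed_flags)

window = [1] * 12 + [0] * 18   # 30-frame window, eyes closed in 12
score = perclos(window)        # 0.4
```

In a deployed system this score would be one input among several (gaze, head pose, steering behavior) rather than a standalone fatigue verdict.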
The main representative of driver state detection combining facial features with multi-sensor fusion is the European Union project "AWAKE". It uses image, pressure, and other sensors to capture driving-state signals such as eye movement, gaze, and steering wheel grip force, and analyzes lane tracking, distance to surrounding objects, and accelerator and brake usage to classify driver fatigue into three states: awake, possibly fatigued, and fatigued, giving a fairly comprehensive assessment and evaluation of the driver's state. The project's alarm system combines auditory, visual, and tactile alerts; when fatigue is detected, the driver's alertness is raised with different sound and light stimuli and seat belt vibration according to the degree of fatigue. Building on this research, Nissan developed an alarm system: when it judges the driver to be in a fatigued driving state, an electronic alarm sounds and a refreshing, mint-like fragrance is sprayed into the cab to dispel the driver's drowsiness in time. If the driver's fatigue state does not improve, the system escalates to audible and visual alarms and automatically stops the vehicle.

2. Conclusion

The development of automotive technology has entered an intelligent era, and machine vision is already applied in many driver assistance technologies. Advances in the field of machine vision will undoubtedly promote the development of driver assistance technology. Accordingly, improving image acquisition quality, optimizing image processing algorithms, and achieving faster intelligent image generation, processing, recognition, and decision-making are the key problems that the field of machine vision needs to solve.
In the future, with continued sensor innovation and increasingly sophisticated image processing algorithms, machine vision technology will better meet the real-time and accuracy requirements of driving.